class: center, middle, inverse, title-slide .title[ #
Introduction to Bayesian Data Analysis
] .subtitle[ ## Lecture 1 ] .author[ ###
Julia Haaf ] .date[ ###
Summer 2025
] --- exclude: true ``` r library("knitr") library(kableExtra) options(htmltools.dir.version = FALSE) opts_chunk$set(echo = FALSE, fig.align = "center") # remotes::install_github("gadenbuie/xaringanExtra") library("xaringanExtra") library("xaringanthemer") my_colors <- c("#495e8c", "#ef9ada") library("ggplot2") library(diagram) ``` ``` r library(tidyverse) ``` ``` r library(brms) ``` <script src="https://cdn.jsdelivr.net/npm/medium-zoom@1.0.6/dist/medium-zoom.js"></script> <script type="module"> import mediumZoom from 'https://cdn.jsdelivr.net/npm/medium-zoom@1.0.6/dist/medium-zoom.esm.js' const zoomDefault = mediumZoom('#zoom-default') const zoomMargin = mediumZoom('#zoom-margin', { margin: 45 }) </script> <script type="text/x-mathjax-config"> MathJax.Hub.Config({ "HTML-CSS": { scale: 150, } }); </script> --- # Who are we? 
.pull-right-30[ <img src="data:image/png;base64,#src/nicole_c.png" width="70%" style="display: block; margin: auto;" /> <img src="data:image/png;base64,#src/JuliaH_01.jpg" width="70%" style="display: block; margin: auto;" /> ] - Dr. Nicole Cruz + Postdoc, University of Potsdam <br> <br> - Dr. Julia Haaf + Professor, University of Potsdam --- class: middle, center <img src="data:image/png;base64,#src/who.png" width="30%" style="display: block; margin: auto;" /> # Who are you? --- ### Overview of the workshop <img src="data:image/png;base64,#../../schedule.png" width="100%" style="display: block; margin: auto;" /> --- # Overview 1. Statistical Modeling 2. Bayesian Statistics + An Example + Prior + Bayes's Theorem + Posterior + Estimation in `R` --- layout:false ### Inferential Statistics <img src="data:image/png;base64,#slides_lecture1_files/figure-html/unnamed-chunk-6-1.png" style="display: block; margin: auto;" /> --- layout: true ### Statistical Models --- .content-box-blue[ .small[ A statistical model is the mathematical representation of a series of statistical assumptions and relationships. Statistical models contain information about the generation of sample data from the population. ] ] -- - Mathematical relationships between random variables and other non-random variables. -- - Statistical model as “formal representation of a theory”. -- - Statistical models represent the process of data generation. --- Statistical model as assumptions about the population -- `\(\rightarrow\)` Statistical Model = Probability Distribution --- #### Example: A single subject pressing a button repeatedly (Chapter 3.2.1) - Finger tapping task (for a review, see Hubel et al. 2013) - Procedure -- + blank screen (200 ms) -- + cross in the middle of a screen + as soon as they see the cross, they tap on the space bar as fast as they can until the experiment is over (361 trials). -- - Dependent measure: Finger tapping times in milliseconds. 
-- - Research question: how long does it take for this particular subject to press a key? --- #### Example: A single subject pressing a button repeatedly (Chapter 3.2.1) <img src="data:image/png;base64,#slides_lecture1_files/figure-html/unnamed-chunk-7-1.png" style="display: block; margin: auto;" /> -- - **Model:** `\(y \sim \text{Normal}(\mu, \sigma^2)\)` --- class:inverse, center, middle layout:false # Bayesian Statistics --- layout:true ### An Example --- .pull-right-45[ <img src="data:image/png;base64,#src/frank5.jpeg" width="85%" style="display: block; margin: auto;" /> ] - This is Frank -- - Frank enjoys eating, but he is somewhat picky -- - We want to model the probability that Frank eats his food. --- layout:true .pull-right-25[ <img src="data:image/png;base64,#src/frank5.jpeg" width="95%" style="display: block; margin: auto;" /> ] ### An Example --- #### The Study - For 20 days (i.e., 40 meals), we observe whether Frank eats his food - Result: `\(x\)` of 40 meals were eaten. -- #### The Statistical Model? -- - `\(Y \sim \mbox{Binomial}(N, \theta),\)` - `\(0 \leq \theta \leq 1, N = 40\)` --- What is `\(\theta\)`? -- #### Before Knowing the Data -- `\(\theta\)` could be 0.5. <img src="data:image/png;base64,#slides_lecture1_files/figure-html/unnamed-chunk-10-1.png" style="display: block; margin: auto;" /> --- What is `\(\theta\)`? #### Before Knowing the Data `\(\theta\)` could be 0.9. <img src="data:image/png;base64,#slides_lecture1_files/figure-html/unnamed-chunk-11-1.png" style="display: block; margin: auto;" /> --- What is `\(\theta\)`? #### Before Knowing the Data `\(\theta\)` could be 0.2. <img src="data:image/png;base64,#slides_lecture1_files/figure-html/unnamed-chunk-12-1.png" style="display: block; margin: auto;" /> --- What is `\(\theta\)`? 
#### Before Knowing the Data <img src="data:image/png;base64,#slides_lecture1_files/figure-html/unnamed-chunk-13-1.png" style="display: block; margin: auto;" /> --- layout:false ### Parameters - In Bayesian Statistics, parameters are also random variables (just like data). -- - This means that parameters in a statistical model also receive a probability distribution. -- - The probability distribution of the parameter changes when we observe data. -- - Before we observe data, the probability distribution of a parameter is called the *prior.* --- layout: false class: inverse, middle, center # Prior --- layout:true ### Prior --- .content-box-blue[ .small[ Definition: The prior distribution is an essential component of Bayesian inference. It represents the information about an uncertain parameter. This information is based on prior knowledge about plausible values of the parameter. ] ] <img src="data:image/png;base64,#src/PutAPriorOnIt.jpg" width="35%" style="display: block; margin: auto;" /> --- class:small-font .pull-right-25[ <img src="data:image/png;base64,#src/PutAPriorOnIt.jpg" width="95%" style="display: block; margin: auto;" /> ] - The appropriate distribution reflects the nature of the parameter + discrete vs. continuous + range: only positive values, or only values between 0 and 1 -- - Knowledge and Uncertainty: The Prior reflects how much prior knowledge about a parameter is available. 
+ The more prior knowledge, the more informed the distribution + i.e., the more candidate parameter values are deemed implausible <img src="data:image/png;base64,#slides_lecture1_files/figure-html/unnamed-chunk-16-1.png" width="450" style="display: block; margin: auto;" /> --- .pull-right-25[ <img src="data:image/png;base64,#src/frank5.jpeg" width="95%" style="display: block; margin: auto;" /> ] - `\(Y \sim \mbox{Binomial}(N, \theta),\)` - `\(0 \leq \theta \leq 1, N = 40\)` - `\(\theta \sim ???\)` -- - Beta distribution: `\(\text{Beta}(\alpha, \beta)\)` <img src="data:image/png;base64,#slides_lecture1_files/figure-html/unnamed-chunk-18-1.png" width="450" style="display: block; margin: auto;" /> --- layout:false ### The Statistical Model - `\(Y \sim \mbox{Binomial}(40, \theta),\)` - `\(\theta \sim \text{Beta}(4, 4)\)` <img src="data:image/png;base64,#slides_lecture1_files/figure-html/unnamed-chunk-19-1.png" width="650" style="display: block; margin: auto;" /> --- ### Prediction for Data <img src="data:image/png;base64,#slides_lecture1_files/figure-html/unnamed-chunk-20-1.png" width="550" style="display: block; margin: auto;" /> -- The prior distribution represents the uncertainty about the actual parameter value in the population. --- ### Observation of Data <img src="data:image/png;base64,#slides_lecture1_files/figure-html/unnamed-chunk-21-1.png" width="550" style="display: block; margin: auto;" /> What happens when we observe data? `\(\rightarrow x = 32\)` --- layout: false class: inverse, middle, center # From Prior to Posterior --- layout:true ### From Prior to Posterior --- - When we observe data, we can update the distribution of the parameter -- - The Prior distribution then becomes the Posterior distribution, into which the new information from the data is integrated. 
-- <img src="data:image/png;base64,#slides_lecture1_files/figure-html/unnamed-chunk-22-1.png" width="550" style="display: block; margin: auto;" /> --- .pull-right-55[ <img src="data:image/png;base64,#slides_lecture1_files/figure-html/unnamed-chunk-23-1.png" width="550" style="display: block; margin: auto;" /> ] What changes? -- **How do we arrive at the Posterior distribution?** --- layout: false class: inverse, middle, center # Bayes's Theorem <img src="data:image/png;base64,#src/Bayes_Theorem.jpg" width="55%" style="display: block; margin: auto;" /> --- layout:true .pull-right-45[ <img src="data:image/png;base64,#src/Bayes_Theorem.jpg" width="65%" style="display: block; margin: auto;" /> ] ### Bayes's Theorem --- - The process by which the Prior distribution becomes the Posterior distribution follows from fundamental probability theory -- - `\(P(\theta | x) = P(\theta) \frac{P(x | \theta)}{P(x)}\)` -- - The Prior distribution is multiplied by another term to arrive at the Posterior distribution --- .pull-left-45[ .content-box-blue[ `\(P(\theta | x) = P(\theta) \frac{P(x | \theta)}{P(x)}\)` ] ] <br> <br> <br> - `\(P(\theta | x)\)`: Posterior, after knowing the data -- - `\(P(\theta)\)`: Prior, before knowing the data -- - `\(P(x | \theta)\)`: Model of the Data, statistical model given the parameters -- - `\(P(x)\)`: Probability of the Data, the prediction for the data averaged across different parameter values. 
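---

Bayes's theorem can also be applied numerically, by evaluating the Prior and the Model of the Data on a grid of `\(\theta\)` values. This sketch is not part of the original materials; it uses the Frank example (`\(\text{Beta}(4, 4)\)` prior, `\(x = 32\)` of `\(N = 40\)` meals eaten):

``` r
# Grid approximation of Bayes's theorem
theta      <- seq(0, 1, length.out = 1001)         # candidate parameter values
prior      <- dbeta(theta, 4, 4)                   # P(theta)
likelihood <- dbinom(32, size = 40, prob = theta)  # P(x | theta)
posterior  <- prior * likelihood                   # numerator of Bayes's theorem
posterior  <- posterior / sum(posterior)           # discrete stand-in for dividing by P(x)
sum(theta * posterior)                             # posterior mean, close to 36/48 = 0.75
```

Dividing by `sum(posterior)` rescales every `\(\theta\)` value by the same amount, which is exactly the role `\(P(x)\)` plays in the theorem.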
--- layout:true ### The Posterior Distribution --- - Calculating the Posterior distribution mathematically is usually very complex -- - For example, the denominator requires integrating over all parameter values: `\(P(x) = \int P(x | \theta) P(\theta) d\theta\)` -- - Therefore, we often use methods that approximate the Posterior distribution (e.g., in `R`) based on the Prior and the Model of the Data --- - Option 1: The Posterior distribution can be calculated mathematically + e.g., the Posterior distribution for the probability parameter that Frank eats his food + `\(\theta | x \sim \text{Beta}(4 + 32, 4 + 8)\)` + More generally: For the Binomial-Beta model, the Posterior is: `\(\theta | x \sim \text{Beta}(\alpha + x, \beta + (N - x))\)` -- - Option 2: The Posterior distribution can only be estimated + Estimation methods based on *Markov Chain Monte Carlo* sampling --- #### The MCMC process <img src="data:image/png;base64,#slides_lecture1_files/figure-html/unnamed-chunk-26-1.png" style="display: block; margin: auto;" /> --- #### The MCMC process <img src="data:image/png;base64,#slides_lecture1_files/figure-html/unnamed-chunk-27-1.png" style="display: block; margin: auto;" /> --- #### The MCMC process <img src="data:image/png;base64,#slides_lecture1_files/figure-html/unnamed-chunk-28-1.png" style="display: block; margin: auto;" /> --- #### The Result <img src="data:image/png;base64,#slides_lecture1_files/figure-html/unnamed-chunk-29-1.png" style="display: block; margin: auto;" /> --- class: medium-font layout:true .pull-right-25[ <img src="data:image/png;base64,#src/frank5.jpeg" width="95%" style="display: block; margin: auto;" /> ] ### Back to Frank --- <img src="data:image/png;base64,#slides_lecture1_files/figure-html/unnamed-chunk-31-1.png" width="450" style="display: block; margin: auto;" /> - What can we now learn about the probability that Frank eats his food? 
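---

Because the Binomial-Beta posterior has a closed form (here `\(\theta | x \sim \text{Beta}(4 + 32, 4 + 8) = \text{Beta}(36, 12)\)`), such questions can be answered with a few lines of `R`. A minimal sketch, not part of the original materials:

``` r
# Closed-form posterior: Beta(4 + 32, 4 + 8) = Beta(36, 12)
a <- 4 + 32
b <- 4 + 8
a / (a + b)                    # posterior mean = 36/48 = 0.75
1 - pbeta(0.8, a, b)           # P(theta > 0.8 | x)
qbeta(c(0.025, 0.975), a, b)   # central 95% credible interval
```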
--- <img src="data:image/png;base64,#slides_lecture1_files/figure-html/unnamed-chunk-32-1.png" width="450" style="display: block; margin: auto;" /> - What is the mean? `\(\hat{\theta} = 0.75\)` --- <img src="data:image/png;base64,#slides_lecture1_files/figure-html/unnamed-chunk-33-1.png" width="450" style="display: block; margin: auto;" /> - What is the probability that Frank eats more than 80% of the meals? --- <img src="data:image/png;base64,#slides_lecture1_files/figure-html/unnamed-chunk-34-1.png" width="450" style="display: block; margin: auto;" /> - What is the probability that Frank eats more than 80% of the meals? + `\(P(\theta > 0.8 | x) \approx 0.2\)` --- <img src="data:image/png;base64,#slides_lecture1_files/figure-html/unnamed-chunk-35-1.png" width="450" style="display: block; margin: auto;" /> - Which parameter values can we deem implausible? -- - Credible Interval: 95% of the posterior distribution is between 0.62 and 0.86 --- layout: false class: inverse, middle, center # Bayesian Parameter Estimation in R --- layout:true ### Bayesian Parameter Estimation in R --- In this course, we use the `R`-package `brms`. 
- `brms` = Bayesian Regression Models using Stan - Created by [Paul-Christian Bürkner](https://github.com/paul-buerkner/brms) - An interface to the [Stan](https://mc-stan.org/) probabilistic programming language + sampling routines - Allows fitting of a huge range of models using `R`'s formula syntax - These slides aim to introduce the main functions --- #### R formula syntax <img src="data:image/png;base64,#src/syntax.png" width="105%" style="display: block; margin: auto;" /> --- #### `brm()` ``` r brm(formula, data, family = gaussian(), prior = NULL, sample_prior = c("no", "yes", "only"), chains = 4, iter = 2000, warmup = floor(iter/2), cores = getOption("mc.cores", 1L)) ``` see `?brm` for more --- class:medium-font #### Binomial Model in `brms` - `\(y_i = 0, 1, 1, 1, 0, ...\)` for the `\(i\)`th meal -- - `\(Y_i \sim \mbox{Bernoulli}(\pi), \;\)` + where `\(\pi = \frac{\exp(\theta)}{\exp(\theta) + 1}\)` (inverse logit function). -- + Prior is placed on `\(\theta\)` instead of `\(\pi\)`: `\(\theta \sim \mbox{Normal}(\mu_\theta, \sigma^2_\theta)\)` -- .pull-left-40[ #### Prior for `\(\theta\)`? `\(\theta = \log \frac{\pi}{1 - \pi}\)` (logit function) ] -- .pull-right-60[ ``` r p_sim <- rbeta(100000, 4, 4) theta_sim <- log(p_sim / (1 - p_sim)) c(mean(theta_sim), sd(theta_sim)) ``` ``` ## [1] -0.0002329545 0.7542324909 ``` ] --- ``` r library(brms) frankdata <- data.frame(y = c(rep(1, 32), rep(0, 8))) fit <- brm(data = frankdata , family = bernoulli(link = "logit") , y ~ 0 + Intercept , prior = c(prior(normal(0, 0.75) , coef = Intercept)) , iter = 2000 , warmup = 700) ``` ``` ## Compiling Stan program... ``` ``` ## Start sampling ``` ``` ## ## SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1). ## Chain 1: ## Chain 1: Gradient evaluation took 4e-06 seconds ## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.04 seconds. ## Chain 1: Adjust your expectations accordingly! 
## Chain 1: ## Chain 1: ## Chain 1: Iteration: 1 / 2000 [ 0%] (Warmup) ## Chain 1: Iteration: 200 / 2000 [ 10%] (Warmup) ## Chain 1: Iteration: 400 / 2000 [ 20%] (Warmup) ## Chain 1: Iteration: 600 / 2000 [ 30%] (Warmup) ## Chain 1: Iteration: 701 / 2000 [ 35%] (Sampling) ## Chain 1: Iteration: 900 / 2000 [ 45%] (Sampling) ## Chain 1: Iteration: 1100 / 2000 [ 55%] (Sampling) ## Chain 1: Iteration: 1300 / 2000 [ 65%] (Sampling) ## Chain 1: Iteration: 1500 / 2000 [ 75%] (Sampling) ## Chain 1: Iteration: 1700 / 2000 [ 85%] (Sampling) ## Chain 1: Iteration: 1900 / 2000 [ 95%] (Sampling) ## Chain 1: Iteration: 2000 / 2000 [100%] (Sampling) ## Chain 1: ## Chain 1: Elapsed Time: 0.003 seconds (Warm-up) ## Chain 1: 0.006 seconds (Sampling) ## Chain 1: 0.009 seconds (Total) ## Chain 1: ``` --- ``` r fit ``` ``` ## Family: bernoulli ## Links: mu = logit ## Formula: y ~ 0 + Intercept ## Data: frankdata (Number of observations: 40) ## Draws: 4 chains, each with iter = 2000; warmup = 700; thin = 1; ## total post-warmup draws = 5200 ## ## Regression Coefficients: ## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS ## Intercept 1.12 0.33 0.50 1.80 1.00 1860 2335 ## ## Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS ## and Tail_ESS are effective sample size measures, and Rhat is the potential ## scale reduction factor on split chains (at convergence, Rhat = 1). ``` --- ``` r plot(fit) ``` <img src="data:image/png;base64,#slides_lecture1_files/figure-html/unnamed-chunk-40-1.png" width="700" style="display: block; margin: auto;" /> --- ``` r post <- as_draws_df(fit) post %>% mutate(pi = exp(b_Intercept) / (1 + exp(b_Intercept))) -> post ggplot(post, aes(pi)) + geom_density() + theme_light(base_size = 16) ``` <img src="data:image/png;base64,#slides_lecture1_files/figure-html/unnamed-chunk-41-1.png" width="500" style="display: block; margin: auto;" /> --- How much did we learn? 
<img src="data:image/png;base64,#slides_lecture1_files/figure-html/unnamed-chunk-42-1.png" width="600" style="display: block; margin: auto;" /> --- ``` r library(ggplot2) bayes_binomial <- function(successes, failures, prior_alpha, prior_beta){ # Parameter of the Posterior aprime <- prior_alpha + successes bprime <- prior_beta + failures # Estimator for theta schaetzer <- aprime / (aprime + bprime) ci <- qbeta(c(0.025, 0.975), aprime, bprime) # Plot cols <- hcl(h = seq(15, 375 , length = 3) , l = 65, c = 100)[1:2] p <- ggplot(data.frame(x = 1), aes(x = x)) + xlim(c(0, 1)) + stat_function(fun = dbeta , args = list(prior_alpha, prior_beta) , geom = "area", alpha = 0.35, aes(fill = 'Prior')) + stat_function(fun = dbeta , args = list(aprime, bprime) , geom = "area", alpha = 0.35, aes(fill = 'Posterior')) + scale_fill_manual(name='Distribution', breaks=c('Prior', 'Posterior'), values=c('Prior' = cols[1], 'Posterior' = cols[2])) + xlab(expression("Parameter" ~ theta)) + ylab("Probability Density") + theme_light(base_size = 14) return(list("estimate" = schaetzer, "ci" = ci, "p" = p)) } ``` --- class:small-code, medium-font .pull-left-45[ ``` r prior1 <- bayes_binomial(successes = 32 , failures = 8 , prior_alpha = 4 , prior_beta = 4) prior1$estimate; prior1$ci ``` ``` ## [1] 0.75 ``` ``` ## [1] 0.6197427 0.8605509 ``` ``` r prior1$p ``` <img src="data:image/png;base64,#slides_lecture1_files/figure-html/unnamed-chunk-44-1.png" style="display: block; margin: auto;" /> ] -- .pull-right-45[ ``` r prior2 <- bayes_binomial(successes = 32 , failures = 8 , prior_alpha = 3 , prior_beta = 7) prior2$estimate; prior2$ci ``` ``` ## [1] 0.7 ``` ``` ## [1] 0.5673703 0.8174806 ``` ``` r prior2$p ``` <img src="data:image/png;base64,#slides_lecture1_files/figure-html/unnamed-chunk-45-1.png" style="display: block; margin: auto;" /> ] --- layout: false class: inverse, middle, center # Wrap up --- ### Wrap up <img src="data:image/png;base64,#src/BayesianLearningCycle.jpg" width="65%" 
style="display: block; margin: auto;" /> --- ### Assignments Assignment 1 ### Literature - Nicenboim, Schad, Vasishth, Introduction to Bayesian Data Analysis for Cognitive Science, Chapter 2 - Navarro, Chapter 17 Introduction and 17.1 --- class: inverse, middle, center # :)